Main Page
Welcome to Leeroopedia
Your ML & Data Knowledge Wiki. Best practices and expert-level knowledge for Machine Learning and Data Engineering, covering 1000+ frameworks and libraries from training to deployment.
Browse implementation patterns, configuration guides, debugging heuristics, and battle-tested defaults for frameworks like vLLM, DeepSpeed, Megatron-LM, FlashAttention, Triton, Unsloth, LangChain, and many more. Every page is structured so both humans and AI agents can find what they need fast.
Connect your AI coding agent. Plug Leeroopedia into the agent of your choice with the Leeroopedia MCP setup guide, and let it search docs, build plans, verify code, and diagnose failures on your behalf.
Go end-to-end. Leeroopedia gives your agent the knowledge; Kapso gives it the ability to act on that knowledge: research, experiment, and deploy.
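For most agents, the MCP connection above boils down to registering a server entry in the agent's configuration file. The snippet below is a hedged sketch of that general shape only: the package name `leeroopedia-mcp` and the `LEEROOPEDIA_API_KEY` variable are hypothetical placeholders, not confirmed values; the MCP setup guide has the actual command and credentials.

```json
{
  "mcpServers": {
    "leeroopedia": {
      "command": "npx",
      "args": ["-y", "leeroopedia-mcp"],
      "env": { "LEEROOPEDIA_API_KEY": "<your-api-key>" }
    }
  }
}
```

Once registered, the agent discovers the server's tools at startup and can call them mid-task, so a prompt like "find the recommended vLLM batching defaults" resolves against Leeroopedia pages rather than the model's memory.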
Browse by Category
| Category | Description | Browse |
|---|---|---|
| Workflows | Step-by-step processes and procedures | Browse All |
| Principles | Core ideas and foundational knowledge | Browse All |
| Implementations | Code-level details and modules | Browse All |
| Heuristics | Best practices and guidelines | Browse All |
| Environments | Setup and configuration guides | Browse All |
Explore Pages
Workflows
- Workflow:Groq Groq python Text Embedding
- Workflow:DataTalksClub Data engineering zoomcamp Spark Batch Processing
- Workflow:Openai Openai python Fine Tuning Job Management
- Workflow:Helicone Helicone LLM Request Proxy Logging
- Workflow:Run llama Llama index Evaluation Pipeline
- Workflow:Elevenlabs Elevenlabs python Conversational AI Agent
- Workflow:Haosulab ManiSkill Custom Task Development
- Workflow:Openai Openai node Streaming To Client
- Workflow:Neuml Txtai Model Training
- Workflow:Mlc ai Web llm Structured Output Generation
Principles
- Principle:Heibaiying BigData Notes Storm Parallelism Configuration
- Principle:Neuml Txtai Graph Network
- Principle:CrewAIInc CrewAI Semantic Retrieval
- Principle:Pola rs Polars Python Binding Configuration
- Principle:Huggingface Transformers Environment Setup
- Principle:Ucbepic Docetl Deterministic Code Operations
- Principle:Avhz RustQuant Volatility Surface
- Principle:Openai Evals Eval Resolution
- Principle:MaterializeInc Materialize Composition Service Definition
- Principle:Spcl Graph of thoughts LLM Prompt Generation
Implementations
- Implementation:Google deepmind Mujoco Platform GUI
- Implementation:Vllm project Vllm RequestOutput Access
- Implementation:Datahub project Datahub Ingest CLI Run
- Implementation:Apache Flink HybridSourceSplitEnumerator HandleSourceEvent
- Implementation:Sgl project Sglang CPU Shared Memory
- Implementation:Intel Ipex llm Env Check Linux
- Implementation:Avhz RustQuant LimitOrderBook
- Implementation:Interpretml Interpret ClassHistogram And Marginal
- Implementation:Langchain ai Langgraph Entrypoint Decorator
- Implementation:Huggingface Datasets Extractor
Heuristics
- Heuristic:Testtimescaling Testtimescaling github io Hardcoded IDs vs Registry
- Heuristic:Duckdb Duckdb Version Sync Across Files
- Heuristic:Spotify Luigi Streaming MapReduce Processing
- Heuristic:Bitsandbytes foundation Bitsandbytes Compressed Statistics Double Quantization
- Heuristic:Apache Dolphinscheduler JDBC Security Blocklist
- Heuristic:Obss Sahi Confidence Threshold Setting
- Heuristic:Anthropics Anthropic sdk python Warning Deprecated LegacyAPIResponse
- Heuristic:Helicone Helicone Tiered Pricing Threshold Matching
- Heuristic:PeterL1n BackgroundMattingV2 Checkpoint Interval Tuning
- Heuristic:ContextualAI HALOs Humanline Clamping
Environments
- Environment:Kornia Kornia PyTorch Python Environment
- Environment:Ray project Ray Docker GPU Environment
- Environment:InternLM Lmdeploy Build From Source
- Environment:LLMBook zh LLMBook zh github io VLLM Inference Environment
- Environment:Pytorch Serve DeepSpeed Environment
- Environment:Openai CLIP Python Dependencies
- Environment:ThreeSR Awesome Inference Time Scaling Semantic Scholar API Environment
- Environment:Roboflow Rf detr Roboflow Deployment Credentials
- Environment:Apache Dolphinscheduler Database Backend
- Environment:Microsoft Autogen LLM Provider API Keys